40 research outputs found

    Predictability Issues in Recommender Systems Based on Web Usage Behavior towards Robust Collaborative Filtering

    This paper examines recommender systems from a security perspective. Research has recently begun to evaluate the vulnerabilities and robustness of various collaborative recommendation techniques in the face of profile-injection and shilling attacks, to which standard collaborative filtering algorithms are vulnerable. This paper examines the robustness of recommender systems and the impact of such attacks, and discusses predictability issues and the main attack strategies. The robustness of a KNN-based recommender system is examined, and the sensitivity of the ratings given by users is analyzed. Robust PLSA is also considered in this work.
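A minimal sketch of the attack the abstract refers to, using hypothetical ratings and standard user-based KNN with cosine similarity: "average attack" profiles rate filler items near their means and the pushed item at the maximum, shifting the prediction for a victim user.

```python
# Hypothetical data; illustrates a profile-injection ("average") attack
# against user-based KNN collaborative filtering.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def predict(ratings, user, item, k=3):
    """KNN prediction: similarity-weighted mean of neighbours' ratings."""
    neigh = [(cosine(ratings[user], r), r[item])
             for u, r in ratings.items() if u != user and item in r]
    neigh.sort(reverse=True)
    top = [(s, x) for s, x in neigh[:k] if s > 0]
    if not top:
        return None
    return sum(s * x for s, x in top) / sum(s for s, _ in top)

ratings = {
    "u1": {"i1": 4, "i2": 2, "target": 1},
    "u2": {"i1": 5, "i2": 1, "target": 2},
    "u3": {"i1": 4, "i2": 2},          # victim: has not rated "target"
}
before = predict(ratings, "u3", "target")

# Average attack: fake profiles rate fillers near their means, target at max.
for n in range(3):
    ratings[f"attack{n}"] = {"i1": 4.5, "i2": 1.5, "target": 5}
after = predict(ratings, "u3", "target")
print(before, after)   # the prediction for the pushed item rises after injection
```

Even this toy example shows why unmodified KNN is fragile: attack profiles crafted to resemble average users gain high similarity and enter the neighbourhood.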

    Extending UPnP for Application Interoperability in a Home Network

    The Universal Plug and Play (UPnP) technology offers pervasive communication across heterogeneous devices in a home or small office network. The UPnP specifications, however, cover interoperability between devices only. This paper proposes an extension of UPnP for application interoperability in a home or small office network: a UPnP Application Architecture that extends the existing UPnP Device Architecture. The extension broadens UPnP from device interoperability to application interoperability, enabling applications to discover, control, and share data with each other regardless of their device type and operating system. In addition to the UPnP Application Architecture, a UPnP Application Template and a UPnP Application Service Template are defined to support the development of UPnP-enabled applications that run on heterogeneous devices in a home or small office network.
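Any such extension still rests on the standard SSDP discovery step defined by the UPnP Device Architecture. A minimal sketch of building and parsing SSDP messages (the search-target values shown, like `upnp:rootdevice`, are standard; no network I/O is performed here):

```python
# Standard SSDP discovery primitives from the UPnP Device Architecture.
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(search_target="ssdp:all", mx=2):
    """Build an SSDP M-SEARCH request as defined by the UPnP spec."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",             # max seconds a device may wait before replying
        f"ST: {search_target}",  # search target: which devices should answer
        "", "",
    ]
    return "\r\n".join(lines).encode("ascii")

def parse_response(raw: bytes) -> dict:
    """Parse the header block of an SSDP response into a dict."""
    head, *rest = raw.decode("ascii", "replace").split("\r\n")
    headers = {}
    for line in rest:
        if ":" in line:
            k, v = line.split(":", 1)
            headers[k.strip().upper()] = v.strip()
    return headers

msg = build_msearch("upnp:rootdevice")
# A real client would send this over UDP multicast with
# socket.sendto(msg, (SSDP_ADDR, SSDP_PORT)) and collect replies for MX seconds.
```

An application-level search target, as the paper proposes, would plug into the same mechanism with a new `ST` value.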

    A Preliminary Exploration of Component-Based Software Engineering

    Component-based software development (CBD) is a methodology the software industry has embraced to accelerate development, reduce costs and timelines, minimize testing requirements, and improve quality and output. Compared with the conventional software development approach, it allows systems to be completed more quickly. Component-based software engineering (CBSE) contributes significantly to the development process by choosing components, identifying systems, and evaluating those systems; its objective is to codify and standardize all disciplines that support CBD-related operations. A comparison of component-based and scripting technologies reveals that, in terms of qualitative performance, component-based technologies scale more effectively. Further study and application of CBSE are directly related to the success of the CBD approach. This paper explores the introductory concepts of and comparative analyses related to component-based software engineering, and highlights why, although CBSE has been around for a while, its proper adoption is still lacking.
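The core CBSE idea the abstract describes can be sketched in a few lines (hypothetical names): a system is assembled from interchangeable components selected by the interface they provide, not by their implementation.

```python
# Minimal component model: a provided interface plus swappable implementations.
from abc import ABC, abstractmethod

class Logger(ABC):                       # a "provided interface"
    @abstractmethod
    def log(self, msg: str) -> str: ...

class ConsoleLogger(Logger):             # one reusable component
    def log(self, msg):
        return f"[console] {msg}"

class PrefixLogger(Logger):              # a drop-in replacement
    def __init__(self, prefix):
        self.prefix = prefix
    def log(self, msg):
        return f"{self.prefix} {msg}"

def build_system(logger: Logger):
    """Component selection happens here: any Logger can be plugged in
    without the assembled system changing."""
    return logger.log("system started")

print(build_system(ConsoleLogger()))         # [console] system started
print(build_system(PrefixLogger("[app]")))   # [app] system started
```

Because each component is tested once against its interface contract, replacing one implementation with another requires no change to the composed system, which is the reuse-and-reduced-testing benefit claimed for CBD.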

    Automatic Ontology Creation by Extracting Metadata from the Source code

    The Semantic Web is built by developing ontologies. For every new project, software companies typically have new developers design new code and components. If a company instead archives the code and components of completed projects, these can be reused without retesting, unlike open-source code and components. File metadata and file-content metadata can be extracted from application files and folders using APIs, and the extracted components can be stored in the Hadoop Distributed File System along with the application environment. The extracted metadata is in XML format. XML operates at the syntactic level, whereas the Web Ontology Language (OWL) supports semantic-level representation of domain knowledge using classes, properties, and instances. This paper converts the data-model elements of XML to an OWL ontology, implementing the mapping with the standard XML transformation technology XSLT.
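A minimal sketch of the XML-to-OWL mapping idea: each distinct XML element name becomes an `owl:Class` and each nesting relation an object property. The paper performs this mapping with XSLT; plain Python and `xml.etree` illustrate the same transformation here, and the element names are hypothetical.

```python
# Map XML data-model elements to OWL constructs (toy version of the
# XSLT mapping described in the abstract).
import xml.etree.ElementTree as ET

def xml_to_owl(xml_text: str) -> str:
    root = ET.fromstring(xml_text)
    classes, relations = set(), set()

    def walk(el, parent=None):
        classes.add(el.tag)                  # element name -> owl:Class
        if parent is not None:
            relations.add((parent, el.tag))  # nesting -> object property
        for child in el:
            walk(child, el.tag)

    walk(root)
    out = ['<rdf:RDF xmlns:owl="http://www.w3.org/2002/07/owl#"',
           '         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">']
    for c in sorted(classes):
        out.append(f'  <owl:Class rdf:ID="{c}"/>')
    for parent, child in sorted(relations):
        out.append(f'  <owl:ObjectProperty rdf:ID="has{child.capitalize()}"/>')
    out.append('</rdf:RDF>')
    return "\n".join(out)

owl = xml_to_owl("<project><file><method/></file></project>")
```

A real mapping would also carry XML attributes over as datatype properties and attach domain/range axioms, but the class-per-element skeleton above is the core of the syntactic-to-semantic lift.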

    A continuous impingement mixing process for effective dispersion of nanoparticles in polymers

    Mixing refers to any process that increases the uniformity of composition and is an integral part of polymer processing. The effective mixing of nanoparticles into polymers continues to be one of the leading problems limiting large-scale production of polymer nanocomposites. Impingement mixing is a novel, relatively simple, continuous-flow mixing process wherein mixing is accomplished by immersing a high-velocity jet in a slower co-flowing stream. The resulting recirculating flow produces an energy cascade that provides a wide range of length scales for efficient mixing. An impingement mixing process was developed and studied through experiments and simulations. Numerical simulations were conducted using FLUENT to better understand the mechanism of operation of the mixer. The formation of a recirculation zone was found to affect the dispersion of nanoparticles. Results of the simulations were compared with experimental data obtained under similar conditions. While this process may be used for any polymer-nanoparticle combination, the primary focus of this study was the dispersion of Single-Walled Carbon Nanotubes (SWNTs) in an epoxy matrix. The dispersion of SWNTs was evaluated by analyzing SEM images of the composites. The image analysis technique used the concept of Shannon entropy to obtain an index of dispersion representative of the degree of mixing. This method of obtaining a dispersion index can be applied to any image analysis technique in which the two components that make up the mixture can be clearly distinguished. The mixing process was also used to disperse SWNTs into a limited number of other polymers. It is an "enabling" process that may be employed for virtually any polymer-nanoparticle combination, and it was shown to be an effective and efficient means of quickly dispersing nanoparticles in polymers.
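A minimal sketch (pure Python, hypothetical toy "images") of the Shannon-entropy dispersion index described above: the image is divided into cells, the fraction of particle pixels in each cell is treated as a probability, and the normalised entropy (1.0 = perfectly uniform) indexes dispersion.

```python
# Shannon-entropy dispersion index for a binary particle image.
from math import log

def dispersion_index(image, cell=2):
    """image: square 2-D list of 0/1 pixels (1 = nanoparticle pixel)."""
    n = len(image)
    counts = {}
    for r in range(n):
        for c in range(n):
            if image[r][c]:
                key = (r // cell, c // cell)       # which grid cell
                counts[key] = counts.get(key, 0) + 1
    total = sum(counts.values())
    ncells = (n // cell) ** 2
    # Entropy of the particle distribution over cells, normalised so a
    # perfectly uniform dispersion scores 1.0 and full clustering scores 0.0.
    h = -sum((k / total) * log(k / total) for k in counts.values())
    return h / log(ncells)

uniform   = [[1 if (r + c) % 2 == 0 else 0 for c in range(4)] for r in range(4)]
clustered = [[1 if r < 2 and c < 2 else 0 for c in range(4)] for r in range(4)]
print(dispersion_index(uniform), dispersion_index(clustered))
```

As the abstract notes, the same index works for any imaging modality in which the two phases of the mixture are clearly distinguishable, since it only needs a binary particle map.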

    Optimistic Opportunistic Routing Techniques for Wireless Sensor Networks: -A Review

    Abstract: Secure and optimistic routing is a significant task in Wireless Sensor Networks (WSNs). In multi-hop communication, selecting the routing path between sensor nodes in the sensor field is essential. In Opportunistic Routing (OR), data packets are forwarded from source to destination through intermediate nodes using coordination-based, time-based, token-based, and network-coding-based techniques. Compared with traditional routing, OR exploits the broadcast nature of wireless transmission, which greatly increases network throughput and reliability. To support higher node density in WSNs, selecting the most optimistic, flexible, dynamic, and reliable OR mechanism and protocol is important. This study identifies the various OR mechanisms and protocols, discusses design issues such as delivery ratio, packet transmission rate, communication pattern, reliability, throughput, and fault tolerance, and tabulates comparative results.
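A minimal sketch (hypothetical link and position data) of the core OR step the review surveys: instead of one fixed next hop, a prioritised candidate set is chosen, and the highest-priority neighbour that actually receives the broadcast forwards the packet.

```python
# Opportunistic routing: candidate selection plus forwarder coordination.
def candidate_set(neighbours, size=2):
    """Rank by expected progress = link delivery probability x geographic
    progress towards the sink, and keep the top `size` forwarders."""
    ranked = sorted(neighbours, key=lambda n: n["p"] * n["progress"], reverse=True)
    return [n["id"] for n in ranked[:size]]

def forward(candidates, received):
    """Coordination step: the first candidate (in priority order) that heard
    the broadcast forwards; lower-priority candidates suppress themselves."""
    for node in candidates:
        if node in received:
            return node
    return None   # no candidate received the packet -> retransmit

neighbours = [
    {"id": "a", "p": 0.9, "progress": 10},
    {"id": "b", "p": 0.5, "progress": 30},
    {"id": "c", "p": 0.8, "progress": 5},
]
cands = candidate_set(neighbours)          # 'b' outranks 'a': 0.5*30 > 0.9*10
winner = forward(cands, received={"a"})    # 'b' missed the packet, so 'a' forwards
```

The coordination rule shown is the timer/priority style; token-based and network-coding-based schemes replace this step with different suppression mechanisms, which is where the surveyed protocols differ.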

    Metabolic adaptation of two in silico mutants of Mycobacterium tuberculosis during infection

    ABSTRACT: Background: To date, Mycobacterium tuberculosis (Mtb) remains the worst intracellular killer pathogen. To establish infection inside the granuloma, Mtb reprograms its metabolism to support both growth and survival, keeping a balance between catabolism, anabolism, and energy supply. Mtb knockouts that remain essential across a wide range of nutritional conditions are deemed target candidates for tuberculosis (TB) treatment. Constraint-based genome-scale modeling is considered a promising tool for evaluating genetic and nutritional perturbations on Mtb metabolic reprogramming. Nonetheless, few in silico assessments of the effect of nutritional conditions on Mtb’s vulnerability and metabolic adaptation have been carried out. Results: A genome-scale model (GEM) of Mtb, modified from the H37Rv iOSDD890, was used to explore the metabolic reprogramming of two Mtb knockout mutants (pfkA- and icl-mutants), lacking key enzymes of central carbon metabolism, while exposed to changing nutritional conditions (oxygen, and carbon and nitrogen sources). A combination of shadow pricing, sensitivity analysis, and flux distribution patterns allowed us to identify metabolic behaviors that agree with phenotypes reported in the literature. During hypoxia, at high glucose consumption, the Mtb pfkA-mutant showed a detrimental growth effect derived from the accumulation of toxic sugar phosphate intermediates (glucose-6-phosphate and fructose-6-phosphate), along with an increase in carbon flux towards the reductive direction of the tricarboxylic acid (TCA) cycle. Furthermore, metabolic reprogramming of the icl-mutant (icl1&icl2) showed the importance of the methylmalonyl pathway for the detoxification of propionyl-CoA during growth at high fatty acid consumption rates under aerobic conditions. At elevated levels of fatty acid uptake and hypoxia, we found a drop in TCA cycle intermediate accumulation that might create redox imbalance.
    Finally, findings regarding Mtb-mutant metabolic adaptation associated with asparagine consumption and acetate, succinate, and alanine production were in agreement with literature reports. Conclusions: This study demonstrates the potential of genome-scale modeling, flux balance analysis (FBA), phenotypic phase plane (PhPP) analysis, and shadow pricing to generate valuable insights into Mtb metabolic reprogramming in the context of human granulomas.
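A minimal sketch (toy two-resource growth model, hypothetical yield numbers) of the shadow-pricing idea used in the study: the shadow price of a nutrient constraint is the change in optimal biomass flux per unit relaxation of that constraint. Real FBA solves a linear program over the full genome-scale stoichiometric matrix; here the toy optimum is analytic.

```python
# Toy illustration of shadow prices in a flux-balance setting.
def max_biomass(glc, o2, y_c=0.09, y_o=0.05):
    """Optimal biomass flux when growth is limited by whichever resource
    (glucose uptake or oxygen uptake) runs out first. The yields y_c, y_o
    are made-up numbers standing in for stoichiometric coefficients."""
    return min(y_c * glc, y_o * o2)

eps = 1.0
base = max_biomass(10.0, 30.0)                   # carbon-limited regime
sp_glc = max_biomass(10.0 + eps, 30.0) - base    # > 0: glucose is limiting
sp_o2  = max_biomass(10.0, 30.0 + eps) - base    # 0: oxygen is not limiting
```

The same logic, applied per metabolite across the genome-scale model, is what lets shadow prices flag which nutrients (glucose under hypoxia, fatty acids, asparagine) constrain each mutant's growth in a given condition.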

    Ontology merging and matching using ontology abstract machine

    Ontology mediation enables interoperability between semantic data sources. It supports data sharing between heterogeneous knowledge bases and reuse by semantic applications. Ontology mediation includes operations such as mapping, alignment, matching, merging, and integration. This paper discusses a new approach to ontology merging and matching using an ontology abstract machine, with an illustration from the health-care domain.
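A minimal sketch (hypothetical toy ontologies) of the matching-then-merging step that the paper formalises with an ontology abstract machine: concepts are matched by normalised label equality, matched pairs are unified, and the remaining concepts from both sources are carried into the merged ontology.

```python
# Label-based ontology matching and merging (toy health-care example).
def normalise(label):
    """Normalise a concept label for comparison."""
    return label.lower().replace("_", "").replace("-", "")

def match(onto_a, onto_b):
    """Return pairs of concepts whose normalised labels coincide."""
    index = {normalise(c): c for c in onto_b}
    return [(c, index[normalise(c)]) for c in onto_a if normalise(c) in index]

def merge(onto_a, onto_b):
    """Merge: keep one representative for each matched pair, plus every
    unmatched concept from both source ontologies."""
    matched = match(onto_a, onto_b)
    merged = set(onto_a) | set(onto_b)
    for a, b in matched:
        merged.discard(b)        # keep onto_a's name for matched concepts
    return sorted(merged), matched

hospital = ["Patient", "Medical_Record", "Physician"]
clinic   = ["patient", "medicalrecord", "Appointment"]
merged, matched = merge(hospital, clinic)
# Patient/patient and Medical_Record/medicalrecord unify; the rest carry over.
```

Real matchers add structural and semantic similarity on top of label comparison; the abstract-machine formulation in the paper makes such operations composable, which this label-only sketch does not attempt to capture.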